What 81,000 People Want from AI at Work – Lessons for Program Leaders
AI in business
Anthropic just published the results of the largest qualitative AI study ever conducted. Over one week in December 2025, more than 80,000 Claude users across 159 countries answered one central question: what do you actually want from AI?
The findings are worth reading slowly. Not because of the scale, though 80,000 interviews in 70 languages is remarkable, but because of what people said. And what they didn't.
The number one ask wasn't efficiency. It was time.
When people described their ideal version of AI, the most common answer was professional excellence: AI handling routine tasks so they could focus on higher-value work. Nearly one in five respondents said this.
But when the interviewer asked the follow-up question, "What would that actually give you?", the answers shifted. People weren't ultimately asking for faster output. They were asking for time back. Time with family. Time to think. Time to be present in the parts of life that matter.
"With AI support I can now leave work on time to pick up my kids from school." – Software engineer, Mexico
That's a more honest picture of what people want from AI at work. Not AI for AI's sake. AI that buys back cognitive and physical presence.
For program leaders, this is worth sitting with. The question isn't just "Which tasks can AI automate?" It's "What would your teams do with the time if coordination overhead went away?"
The tension that doesn't resolve: productivity and the treadmill
Half of all respondents cited time-saving as a concrete benefit they'd already experienced from AI. That's the highest of any category.
And yet nearly one in five had a different experience: more output expected of them, faster. The treadmill sped up.
"The ratio of my work time to rest time hasn't changed at all. You just have to run faster and faster to stay in place." – Freelance software engineer, France
This isn't an AI problem. It's an organizational structure problem. When AI improves individual throughput without changing how work is coordinated, the gains get absorbed by the system. Teams produce more, but the coordination overhead (the status updates, the alignment meetings, the chasing of decisions) stays constant or grows.
That's the trap. Productivity tools at the individual level, without execution visibility at the program level, just mean more work flowing into the same broken coordination layer.
81% said AI had already moved them toward their vision. But 19% said it hadn't.
The study asked whether AI had ever delivered on what people hoped for. 81% said yes.
The most common forms of delivery were productivity (32%), cognitive partnership (17%), and learning (10%). People described AI as a thinking partner, a patient tutor, a tireless colleague, always available, never bored, free of judgment.
But nearly one in five said AI had fallen short. The primary reason: unreliability. Hallucinations. Confident wrong answers. The cost of verification often exceeding the benefit of the assistance.
"An assistant that sounds sure but is often wrong forces you to treat everything as suspect. Instead of freeing attention, it creates a permanent 'fact-check tax.'" – United States
This is a real risk in program management contexts. AI-generated status reports, AI-drafted risk summaries, AI-synthesized decision logs: all of these are only valuable if the underlying information is trustworthy and current. If the source data is scattered, stale, or fragmented across tools, AI adds noise, not signal.
The lesson isn't to avoid AI in execution contexts. It's that AI needs a solid coordination layer underneath it to work well.
Cognitive atrophy is the concern professionals aren't saying out loud
The study surfaced a fear that shows up quietly in executive conversations but rarely makes it into AI strategy discussions: that AI use is eroding the thinking habits that made people good at their jobs.
17% of respondents worried about this. But among educators and academics, the rate was significantly higher — and they were reporting it not as a personal fear, but as something they were already observing.
"I don't think as much as I used to. I struggle to put the ideas I do have into words." – Heavy AI user, United States
For program leaders, this raises a real question about how AI gets used in practice. There's a difference between AI that helps you think (structuring an argument, stress-testing a plan, surfacing a blind spot) and AI that replaces thinking altogether. The former builds judgment over time. The latter may quietly erode it.
The best use of AI in program management isn't automating the thinking. It's automating the coordination overhead that currently prevents thinking from happening at all.
The regional divide worth knowing about
The study found a clear pattern: people in lower- and middle-income countries were more optimistic about AI than those in Western Europe and North America. And the gap wasn't small.
The strongest predictor of AI sentiment, globally, was concern about economic displacement. Where that concern was lower, because AI hadn't yet visibly disrupted local labor markets, optimism was higher.
This matters for program leaders running cross-functional or multinational programs. The teams in your program may have very different relationships with AI adoption. Not because some are more sophisticated than others, but because the risk calculus is genuinely different depending on where people sit.
Alignment isn't just about shared tools. It's about shared context. And AI adoption is changing the context in different ways for different parts of your organization.
What this means for cross-functional program execution
The Anthropic study wasn't designed with program managers in mind. But read through that lens, a few things stand out.
People want AI to reduce cognitive load, not add to it. The most valued experiences were AI that took burdens away: drafting, summarizing, synthesizing, organizing. The frustrating experiences were AI that created new work: verifying outputs, correcting hallucinations, redoing what AI got wrong. In program management, this means AI is most valuable when it works with clean, structured execution data. It creates the most friction when it's asked to make sense of fragmented, inconsistent information.
Individual productivity gains don't automatically translate to program visibility. This is the core insight from the "treadmill" finding. Teams can become more productive individually (shipping faster, writing more, communicating more) while the program-level view gets harder to maintain, not easier. Execution drift doesn't care how fast individual contributors are moving. It cares whether the program has a shared, current view of decisions, risks, and progress.
Judgment still matters, maybe more than before. The study found that people in high-stakes roles (lawyers, healthcare workers, finance professionals) were simultaneously the most enthusiastic about AI-assisted decision-making and the most burned by its failures. The pattern for program leaders is similar. AI can help you process more information, faster. But the judgment about what the information means, and what to do about it, still belongs to you.
The question to ask your team
The Anthropic study ended with a reflection worth borrowing: most of what people wanted from AI wasn't about working faster. It was about living better. More present. More human.
That's also true at the program level. The goal of better execution isn't more output. It's more clarity, less noise, fewer surprises, and the ability to make good decisions without running on fumes.
If AI in your programs is creating more coordination overhead than it's removing, that's a signal worth acting on.
The gap between plan and reality doesn't close by working harder. It closes by having a clearer shared view of where reality actually is, and building the routines to keep that view current.
Interested in how program teams build that shared view? See how In Parallel works →
Or book a demo to see it in practice.
Source: Saffron Huang et al., "What 81,000 People Want from AI," Anthropic, March 18, 2026. https://anthropic.com/features/81k-interviews